-
Abstract: Engineering design is undergoing a transformative shift with the advent of AI, marking a new era in how products, systems, and services are planned. Large language models (LLMs) have demonstrated impressive capabilities in enabling this shift. Yet, with text as their only input modality, they cannot leverage the large body of visual artifacts that engineers have used for centuries and are accustomed to. This gap is addressed by the release of multimodal vision-language models (VLMs), such as GPT-4V, enabling AI to impact many more types of tasks. Our work presents a comprehensive evaluation of VLMs across a spectrum of engineering design tasks, categorized into four main areas: Conceptual Design, System-Level and Detailed Design, Manufacturing and Inspection, and Engineering Education Tasks. Specifically, we assess the capabilities of two VLMs, GPT-4V and LLaVA 1.6 34B, on design tasks such as sketch similarity analysis, CAD generation, topology optimization, manufacturability assessment, and engineering textbook problems. Through this structured evaluation, we not only explore VLMs' proficiency in handling complex design challenges but also identify their limitations in complex engineering design applications. Our research establishes a foundation for future assessments of vision-language models. It also contributes a set of benchmark testing datasets, comprising more than 1,000 queries, for ongoing advancements and applications in this field.
Free, publicly-accessible full text available September 1, 2026
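The abstract above describes a benchmark of over 1,000 queries grouped into four task areas, with model answers scored per area. As a minimal sketch of how such results might be aggregated, the snippet below tallies per-category accuracy over hypothetical query records; the record fields and category labels here are illustrative assumptions, not the paper's actual data format.

```python
from collections import defaultdict

# Hypothetical records: each benchmark query has a task category and a flag
# recording whether the VLM's answer was judged correct.
queries = [
    {"category": "Conceptual Design", "correct": True},
    {"category": "Conceptual Design", "correct": False},
    {"category": "Manufacturing and Inspection", "correct": True},
    {"category": "Engineering Education", "correct": True},
]

def accuracy_by_category(records):
    """Return the fraction of correct answers per task category."""
    totals, hits = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["category"]] += 1
        hits[r["category"]] += r["correct"]  # bool counts as 0/1
    return {c: hits[c] / totals[c] for c in totals}

print(accuracy_by_category(queries))
# → {'Conceptual Design': 0.5, 'Manufacturing and Inspection': 1.0, 'Engineering Education': 1.0}
```

Grouping results this way makes it easy to compare two models (e.g., GPT-4V vs. LLaVA 1.6 34B) area by area rather than by a single aggregate score.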
-
Abstract: Sketch2Prototype is an AI-based framework that transforms a hand-drawn sketch into a diverse set of 2D images and 3D prototypes through sketch-to-text, text-to-image, and image-to-3D stages. The framework, demonstrated across various sketches, rapidly generates text, image, and 3D modalities for enhanced early-stage design exploration. We show that using text as an intermediate modality outperforms direct sketch-to-3D baselines in generating diverse and manufacturable 3D models. We also find limitations in current image-to-3D techniques, while noting the value of the text modality for user feedback.
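The three-stage pipeline the abstract describes (sketch-to-text, text-to-image, image-to-3D, with text as the intermediate modality) can be sketched as a simple chain of stages. All function names and the stub behavior below are hypothetical stand-ins for the actual models; only the stage ordering reflects the abstract.

```python
from dataclasses import dataclass, field

@dataclass
class PrototypeResult:
    description: str                      # intermediate text modality
    image_ids: list = field(default_factory=list)
    model_ids: list = field(default_factory=list)

def sketch_to_text(sketch_path: str) -> str:
    # Stand-in: a captioning model would describe the hand-drawn sketch here.
    return f"caption for {sketch_path}"

def text_to_images(description: str, n: int = 4) -> list:
    # Stand-in: a text-to-image model would generate n candidate renderings.
    return [f"{description}#img{i}" for i in range(n)]

def image_to_3d(image_id: str) -> str:
    # Stand-in: an image-to-3D model would lift each rendering to a mesh.
    return image_id + "#mesh"

def sketch_to_prototype(sketch_path: str) -> PrototypeResult:
    """Chain the three stages, keeping text as the intermediate modality."""
    description = sketch_to_text(sketch_path)
    images = text_to_images(description)
    models = [image_to_3d(img) for img in images]
    return PrototypeResult(description, images, models)
```

Keeping the text stage explicit, as the abstract argues, also gives the user a natural point to inspect and edit the description before images and 3D models are generated.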
-
Abstract: The emergence of soft robots has presented new challenges in controlling the underlying fluidics of such systems. Here, we introduce a strategy for additively manufacturing unified soft robots comprising fully integrated fluidic circuitry in a single print run via PolyJet three-dimensional (3D) printing. We explore the efficacy of this approach for soft robots designed to leverage novel 3D fluidic circuit elements (e.g., fluidic diodes, "normally closed" transistors, and "normally open" transistors with geometrically tunable pressure-gain functionalities) to operate in response to fluidic analogs of conventional electronic signals, including constant-flow ["direct current (DC)"], "alternating current (AC)"-inspired, and preprogrammed aperiodic ("variable current") input conditions. By enabling fully integrated soft robotic entities (composed of soft actuators, fluidic circuitry, and body features) to be rapidly disseminated, modified on demand, and 3D-printed in a single run, the presented design and additive manufacturing strategy offers unique promise to catalyze new classes of soft robots.